AI-driven cyberattacks more sophisticated and scalable, but ASU expert offers solutions



Cyberattacks were once engineered by crafty hackers looking to infiltrate computer systems. Artificial intelligence now allows hackers to launch attacks at a new scale, targeting banking systems, critical infrastructure, intellectual property, and even traffic lights and baby monitors.

With October pegged as Cybersecurity Awareness Month, ASU News turned to Victor Benjamin, assistant professor of information systems in Arizona State University's W. P. Carey School of Business, who has been researching this phenomenon for years.

He said the next generation of AI is troubling and invasive, with great potential to create societal chaos.

Here’s what Benjamin had to say about AI’s role in cyberattacks and what we can do to stem the tide:

Victor Benjamin

Question: You’ve recently penned several scholarly articles on AI entering the field of cyberattacks. What does that mean for America?

Answer: AI-driven cyberattacks pose a serious challenge to America’s cybersecurity infrastructure. As AI technologies become more accessible, they allow even those with minimal technical skills to carry out sophisticated attacks on everything from financial systems to critical infrastructure like power grids.

This is not entirely new; low-skill hackers called “script kiddies” have always existed. But AI allows these actors to create even more sophisticated, wide-scale attacks. Integrating AI into cybercrime introduces a scale and efficiency previously unseen, forcing security professionals to strengthen defenses rapidly. The ease with which these tools can be misused raises concerns about national security, requiring both public and private sectors to invest heavily in countermeasures.

In essence, the democratization of AI, while beneficial for innovation, has also provided bad actors with powerful tools. America must redouble its efforts in cybersecurity, not just in preventing attacks but in adapting its legal frameworks and policies to tackle AI-related threats. It has been a double-edged sword.

Q: These AI attacks sound more severe than cyberattacks in the past. How has the threat changed?

A: The threat posed by AI-driven cyberattacks significantly differs from traditional cyberattacks in terms of sophistication and scalability. AI allows attackers to target a wide variety of systems, each with unique vulnerabilities, all at once. What once required specialized expertise can now be done by novices using AI tools, making it easier for more people to launch attacks. This has led to a broader threat landscape where cyberattacks are not only more frequent but also potentially more dangerous due to their automated nature.

AI’s ability to generate convincing phishing scams using large language models (LLMs) heightens the risk for individuals and organizations. Traditional phishing attacks often have limitations in quality and variation, but AI can create highly authentic, diverse and personalized phishing content at scale, making these attacks much harder to identify. The combination of scalability, accessibility for novice attackers and the ability to automate complex attacks makes AI-driven threats far more severe than those of the past.

In short, attackers can leverage AI to produce higher-quality attacks at a greater rate than seen previously.

Q: What were some of the most troubling findings in your research?

A: One of the most troubling findings in my research is the rise of AI-powered tools specifically designed to cause harm. Within the dark net, many hackers are creating AI-powered hacking tools that they sell or give away to others. Some tools are powered by large language models and can help hackers generate phishing scams, malware and violent content with ease. Even more concerning is the use of AI to create deepfakes, including child pornography, which can now be produced and distributed with minimal effort.

Another disturbing aspect is the potential for AI-driven attacks on physical systems. The "Flipper Zero" is just one example of a readily available device that can be used to hack cyber-physical systems. Some hackers are using LLMs to generate hacking scripts that can be launched from the Flipper Zero. This device and others like it can be used to manipulate real-world infrastructure like traffic lights, cars or even power grids, making the threat of physical harm from AI-driven cyberattacks a growing concern.

Q: Cyberattacks always seem to be coming at us, continually putting us on the back foot. Will there ever be a time when we can be proactive as opposed to reactive?

A: A proactive stance in cybersecurity is becoming increasingly possible due to AI. Traditionally, cybersecurity has been reactive, a game of resolving vulnerabilities as they are discovered. However, AI allows us to shift towards a more proactive approach.

By continuously monitoring networks and identifying potential threats in real time, AI can detect patterns and anomalies that human analysts might miss. Some of these strategies have been employed in the past, but AI's enhanced pattern-detection capabilities make them far more effective. The result is quicker responses to emerging threats and detection of a wider range of threats, which reduces the window for damage.
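In practice, the anomaly detection Benjamin describes often boils down to models that learn a baseline of "normal" traffic and flag deviations from it. The sketch below is a minimal, hypothetical illustration using scikit-learn's IsolationForest; the flow features and numbers are invented for the example, not drawn from his research or any real network.

```python
# Minimal sketch of AI-assisted network monitoring: an unsupervised model
# (scikit-learn's IsolationForest) learns what normal traffic looks like
# and flags flows that deviate from it. All numbers here are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical baseline flow records: [bytes_sent, packets, duration_sec, ports_touched]
baseline = np.column_stack([
    rng.normal(1_200, 200, 500),   # bytes sent per flow
    rng.normal(10, 2, 500),        # packets per flow
    rng.normal(0.8, 0.2, 500),     # flow duration in seconds
    rng.integers(1, 4, 500),       # distinct ports contacted
])

# Train on traffic assumed to be mostly benign; expect roughly 1% outliers.
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score new observations: the second row simulates a scan-like burst.
candidates = np.array([
    [1_150, 9, 0.7, 2],
    [95_000, 900, 0.2, 250],
])
for flow, label in zip(candidates, detector.predict(candidates)):
    print(flow, "flag for analyst review" if label == -1 else "looks normal")
```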

Companies are already deploying AI to track bots, detect phishing attempts and compile databases of new malware. Not every threat will be trackable, and many cyberthreats may rely on social engineering, exploiting people rather than computer systems. But with the right tools, cybersecurity teams can get ahead of the curve on at least some potential attacks and neutralize threats before they cause significant harm.
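The phishing detection he mentions can similarly be framed as text classification. The toy sketch below trains a TF-IDF and logistic regression pipeline on a few invented messages to show the basic idea; production systems rely on far larger labeled corpora and additional signals such as senders, links and headers.

```python
# Toy sketch of text-based phishing detection: a TF-IDF + logistic regression
# pipeline trained on a handful of invented messages. Purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked, verify your password now at this link",
    "Urgent: wire transfer needed, reply with your banking details",
    "Team lunch moved to noon on Thursday, same room as last week",
    "Attached are the meeting notes from Monday's project review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
classifier.fit(messages, labels)

suspect = ["Confirm your password immediately or your account will be closed"]
print(classifier.predict_proba(suspect))  # columns: [P(legitimate), P(phishing)]
```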

Q: Given what you just said, what are the best practices we can take to prevent an AI or regular cyberattack?

A: Cyberattacks will continue to occur and will never go away. But we can do some things to reduce our threat exposure. 

The best approach is a combination of advanced tools coupled with cybersecurity education to help individuals become more mindful and secure. AI-powered cybersecurity tools can leverage enhanced pattern-detection capabilities to identify network intrusions, novel malware and other threats. These systems can flag unusual activity and respond much faster than human analysts, making it harder for hackers to exploit weaknesses that they discover. At the same time, regular system updates and patching of known vulnerabilities remain fundamental practices for maximizing cybersecurity defense.

However, many cyberattacks today involve social engineering to gain a foothold in networks, and that is where education can help mitigate some risks. Educating users to recognize phishing attempts or suspicious behavior is critical. Organizations should adopt cybersecurity training policies if they have not already. Further, I do not think it would be a stretch to include topics on mindful computer use and cybersecurity practices in K–12 education. These types of risks are simply part of life now, so awareness of them should be widespread.
